KANFormer for Predicting Fill Probabilities via Survival Analysis in Limit Order Books

Zhong, Jinfeng, Bacry, Emmanuel, Guilloux, Agathe, Muzy, Jean-François

arXiv.org Artificial Intelligence

This paper introduces KANFormer, a novel deep-learning-based model for predicting the time-to-fill of limit orders by leveraging both market- and agent-level information. KANFormer combines a Dilated Causal Convolutional network with a Transformer encoder, enhanced by Kolmogorov-Arnold Networks (KANs), which improve nonlinear approximation. Unlike existing models that rely solely on a series of snapshots of the limit order book, KANFormer integrates the actions of agents related to LOB dynamics and the position of the order in the queue to more effectively capture patterns related to execution likelihood. We evaluate the model using CAC 40 index futures data with labeled orders. The results show that KANFormer outperforms existing approaches in both calibration (Right-Censored Log-Likelihood, Integrated Brier Score) and discrimination (C-index, time-dependent AUC). We further analyze feature importance over time using SHAP (SHapley Additive exPlanations). Our results highlight the benefits of combining rich market signals with expressive neural architectures to achieve accurate and interpretable predictions of fill probabilities.
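The discrimination metrics cited here (C-index, time-dependent AUC) reward models whose predicted risks order correctly against observed fill times. A minimal sketch of the concordance index for right-censored time-to-fill data, with purely illustrative toy values:

```python
def concordance_index(times, predicted_risks, observed):
    """Fraction of comparable pairs whose predicted risk ordering matches
    the observed time-to-fill ordering (higher risk => earlier fill).
    A pair (i, j) is comparable if order i filled before time j
    and i's fill was actually observed (not right-censored)."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and observed[i]:
                comparable += 1
                if predicted_risks[i] > predicted_risks[j]:
                    concordant += 1
                elif predicted_risks[i] == predicted_risks[j]:
                    concordant += 0.5  # ties get half credit
    return concordant / comparable

# toy example: three filled orders and one censored (never-filled) order
times = [1.0, 2.0, 3.0, 4.0]
risks = [0.9, 0.7, 0.4, 0.2]   # risks perfectly anti-ordered with time-to-fill
obs   = [1, 1, 1, 0]           # last order is right-censored
score = concordance_index(times, risks, obs)
```

A perfectly discriminating model reaches 1.0; random risk assignment is around 0.5.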


TABL-ABM: A Hybrid Framework for Synthetic LOB Generation

Olby, Ollie, Baggott, Rory, Stillman, Namid

arXiv.org Artificial Intelligence

The recent application of deep learning models to financial trading has heightened the need for high-fidelity financial time series data. This synthetic data can be used to supplement historical data to train large trading models. State-of-the-art generative models often rely on huge amounts of historical data and large, complicated architectures, ranging from autoregressive and diffusion-based models through to architecturally simpler models such as the temporal-attention bilinear layer. Agent-based approaches to modelling limit order book dynamics can also recreate trading activity through mechanistic models of trader behaviours. In this work, we demonstrate how a popular agent-based framework for simulating intraday trading activity, the Chiarella model, can be combined with one of the most performant deep learning models for forecasting multivariate time series, the TABL model. This forecasting model is coupled to a simulation of a matching engine with a novel method for simulating deleted order flow. Our simulator gives us the ability to test the generative abilities of the forecasting model using stylised facts. Our results show that this methodology generates realistic price dynamics; however, on closer analysis, parts of the market's microstructure are not accurately recreated, highlighting the need to include more sophisticated agent behaviours in the modelling framework to account for tail events.
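Stylised-fact checks of the kind used to evaluate such generators can be illustrated with a simple heavy-tails test: empirical asset returns typically show positive excess kurtosis, while Gaussian returns do not. A toy sketch, in which the variance-mixture "simulated" returns are purely illustrative stand-ins:

```python
import random
import statistics

def excess_kurtosis(xs):
    """Sample excess kurtosis: 0 for a Gaussian, positive for heavy tails."""
    m = statistics.fmean(xs)
    s2 = statistics.fmean((x - m) ** 2 for x in xs)
    m4 = statistics.fmean((x - m) ** 4 for x in xs)
    return m4 / (s2 ** 2) - 3.0

random.seed(0)
gaussian_returns = [random.gauss(0, 1) for _ in range(50_000)]
# toy heavy-tailed returns via a simple variance mixture (occasional high-vol regime)
heavy_returns = [random.gauss(0, 1) * (2.0 if random.random() < 0.05 else 1.0)
                 for _ in range(50_000)]

k_gauss = excess_kurtosis(gaussian_returns)   # close to 0
k_heavy = excess_kurtosis(heavy_returns)      # clearly positive
```

The same statistic, computed on generated versus historical returns, is one of the standard stylised-fact comparisons.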


Right Place, Right Time: Market Simulation-based RL for Execution Optimisation

Olby, Ollie, Bacalum, Andreea, Baggott, Rory, Stillman, Namid

arXiv.org Artificial Intelligence

Execution algorithms are vital to modern trading: they enable market participants to execute large orders while minimising market impact and transaction costs. As these algorithms grow more sophisticated, optimising them becomes increasingly challenging. In this work, we present a reinforcement learning (RL) framework for discovering optimal execution strategies, evaluated within a reactive agent-based market simulator. This simulator creates reactive order flow and allows us to decompose slippage into its constituent components: market impact and execution risk. We assess the RL agent's performance using the efficient frontier based on work by Almgren and Chriss, measuring its ability to balance risk and cost. Results show that the RL-derived strategies consistently outperform baselines and operate near the efficient frontier, demonstrating a strong ability to optimise for risk and impact. These findings highlight the potential of reinforcement learning as a powerful tool in the trader's toolkit.
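The Almgren-Chriss efficient frontier arises from trading schedules that trade off expected impact cost against execution risk, indexed by a risk-aversion parameter. A sketch of the classic closed-form optimal liquidation schedule, with all parameter values illustrative assumptions rather than calibrated figures:

```python
import math

def almgren_chriss_trajectory(X, N, T, lam, sigma, eta):
    """Discretised Almgren-Chriss optimal liquidation holdings x_k.
    X: initial position, N: number of intervals, T: horizon,
    lam: risk aversion, sigma: price volatility, eta: temporary impact.
    kappa (urgency) uses the standard small-interval approximation;
    larger lam => more front-loaded selling."""
    tau = T / N
    kappa = math.sqrt(lam * sigma ** 2 / eta)
    return [X * math.sinh(kappa * (T - k * tau)) / math.sinh(kappa * T)
            for k in range(N + 1)]

# illustrative parameters: liquidate 1M shares over one day in 10 slices
holdings = almgren_chriss_trajectory(X=1_000_000, N=10, T=1.0,
                                     lam=2e-6, sigma=0.95, eta=2.5e-6)
```

Sweeping `lam` from near zero (risk-neutral, roughly linear schedule) to large values (aggressive front-loading) traces out the efficient frontier against which the RL policies are compared.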


Limit Order Book Simulation and Trade Evaluation with $K$-Nearest-Neighbor Resampling

Giegrich, Michael, Oomen, Roel, Reisinger, Christoph

arXiv.org Machine Learning

In this paper, we show how $K$-nearest neighbor ($K$-NN) resampling, an off-policy evaluation method proposed in \cite{giegrich2023k}, can be applied to simulate limit order book (LOB) markets and how it can be used to evaluate and calibrate trading strategies. Using historical LOB data, we demonstrate that our simulation method is capable of recreating realistic LOB dynamics and that synthetic trading within the simulation leads to a market impact in line with the corresponding literature. Compared to other statistical LOB simulation methods, our algorithm has theoretical convergence guarantees under general conditions, does not require optimization, is easy to implement and computationally efficient. Furthermore, we show that in a benchmark comparison our method outperforms a deep learning-based algorithm for several key statistics. In the context of a LOB with pro-rata type matching, we demonstrate how our algorithm can calibrate the size of limit orders for a liquidation strategy. Finally, we describe how $K$-NN resampling can be modified for choices of higher dimensional state spaces.
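The core resampling step can be sketched as follows: at each simulated step, find the K historical states nearest the current state and apply the transition increment observed after one of them. This one-dimensional toy ignores the paper's LOB feature spaces and theoretical machinery; names and data are illustrative:

```python
import random

def knn_resample_step(state, history, k=5, rng=random):
    """One simulation step of K-NN resampling (illustrative sketch):
    pick the k historical states closest to the current state and
    resample one of their observed transitions.
    `history` is a list of (state, next_state) pairs; states are scalars
    here for simplicity, though in practice they are LOB feature vectors."""
    neighbours = sorted(history, key=lambda h: abs(h[0] - state))[:k]
    s, s_next = rng.choice(neighbours)
    return state + (s_next - s)  # apply the neighbour's observed increment

random.seed(1)
# toy historical transitions: small noisy moves around each observed state
history = [(s, s + random.gauss(0, 0.1)) for s in [0.0, 0.5, 1.0, 1.5, 2.0]]

path = [1.0]
for _ in range(20):
    path.append(knn_resample_step(path[-1], history, k=3))
```

Because the method only reorders and resamples observed transitions, it needs no model fitting or optimisation, which is the source of the computational efficiency claimed above.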


Reinforcement Learning in Agent-Based Market Simulation: Unveiling Realistic Stylized Facts and Behavior

Yao, Zhiyuan, Li, Zheng, Thomas, Matthew, Florescu, Ionut

arXiv.org Artificial Intelligence

Investors and regulators can greatly benefit from a realistic market simulator that enables them to anticipate the consequences of their decisions in real markets. However, traditional rule-based market simulators often fall short in accurately capturing the dynamic behavior of market participants, particularly in response to external market impact events or changes in the behavior of other participants. In this study, we explore an agent-based simulation framework employing reinforcement learning (RL) agents. We present the implementation details of these RL agents and demonstrate that the simulated market exhibits realistic stylized facts observed in real-world markets. Furthermore, we investigate the behavior of RL agents when confronted with external market impacts, such as a flash crash. Our findings shed light on the effectiveness and adaptability of RL-based agents within the simulation, offering insights into their response to significant market events.


Adaptive Market Making via Online Learning

Neural Information Processing Systems

We consider the design of strategies for market making in an exchange. A market maker generally seeks to profit from the difference between the buy and sell price of an asset, yet the market maker also takes exposure risk in the event of large price movements. Profit guarantees for market making strategies have typically required certain stochastic assumptions on the price fluctuations of the asset in question; for example, assuming a model in which the price process is mean reverting. We propose a class of "spread-based" market making strategies whose performance can be controlled even under worst-case (adversarial) settings. We prove structural properties of these strategies which allow us to design a master algorithm that obtains low regret relative to the best such strategy in hindsight. We run a set of experiments showing favorable performance on recent real-world stock price data.
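A toy version of a spread-based strategy in this spirit: maintain a quote window of fixed width, buy one unit when the price reaches the window's lower edge, sell when it reaches the upper edge, and re-centre after each fill. The mechanics below are a simplification for illustration, not the paper's exact construction:

```python
def spread_strategy(prices, spread):
    """Toy spread-based market maker: keep a quote window [lo, lo + spread];
    buy at the lower edge, sell at the upper edge, re-centring the window
    on each fill. Returns final cash plus inventory marked to the last price.
    Mean-reverting prices yield round-trip profits of roughly one spread each;
    trending prices accumulate inventory (the exposure risk noted above)."""
    lo = prices[0] - spread / 2
    cash, inventory = 0.0, 0
    for p in prices:
        if p <= lo:                 # bid hit: buy and shift window down
            cash -= p
            inventory += 1
            lo = p - spread / 2
        elif p >= lo + spread:      # ask hit: sell and shift window up
            cash += p
            inventory -= 1
            lo = p - spread / 2
    return cash + inventory * prices[-1]

# oscillating price: two profitable round trips
pnl = spread_strategy([100, 99, 100, 99, 100], spread=1.0)
```

The master algorithm in the paper then runs many such strategies with different spreads in parallel and mixes them, online-learning style, to track the best spread in hindsight.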


Asynchronous Deep Double Duelling Q-Learning for Trading-Signal Execution in Limit Order Book Markets

Nagy, Peer, Calliess, Jan-Peter, Zohren, Stefan

arXiv.org Artificial Intelligence

We employ deep reinforcement learning (RL) to train an agent to successfully translate a high-frequency trading signal into a trading strategy that places individual limit orders. Based on the ABIDES limit order book simulator, we build a reinforcement learning OpenAI gym environment and utilise it to simulate a realistic trading environment for NASDAQ equities based on historic order book messages. To train a trading agent that learns to maximise its trading return in this environment, we use Deep Duelling Double Q-learning with the APEX (asynchronous prioritised experience replay) architecture. The agent observes the current limit order book state, its recent history, and a short-term directional forecast. To investigate the performance of RL for adaptive trading independently from a concrete forecasting algorithm, we study the performance of our approach utilising synthetic alpha signals obtained by perturbing forward-looking returns with varying levels of noise. Here, we find that the RL agent learns an effective trading strategy for inventory management and order placing that outperforms a heuristic benchmark trading strategy having access to the same signal.
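The Duelling and Double Q-learning components named above can be sketched independently of any network library: duelling aggregation combines a state value with mean-centred per-action advantages, and the Double DQN target lets the online network select the action while the target network evaluates it. Function names and numbers here are illustrative:

```python
def duelling_q_values(value, advantages):
    """Duelling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).
    Subtracting the mean advantage makes the V/A decomposition identifiable."""
    mean_adv = sum(advantages) / len(advantages)
    return [value + a - mean_adv for a in advantages]

def double_q_target(reward, gamma, q_online_next, q_target_next):
    """Double DQN bootstrap target: the online network picks the action,
    the target network evaluates it, reducing overestimation bias."""
    best = max(range(len(q_online_next)), key=q_online_next.__getitem__)
    return reward + gamma * q_target_next[best]

# toy example: 3 actions (e.g. place bid, place ask, do nothing)
q = duelling_q_values(value=1.0, advantages=[0.5, -0.1, -0.4])
target = double_q_target(reward=1.0, gamma=0.9,
                         q_online_next=[0.2, 0.8], q_target_next=[0.5, 0.3])
```

In the full agent these pieces sit inside a deep network trained asynchronously with prioritised replay; the arithmetic above is only the aggregation and target logic.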


IMM: An Imitative Reinforcement Learning Approach with Predictive Representation Learning for Automatic Market Making

Niu, Hui, Li, Siyuan, Zheng, Jiahao, Lin, Zhouchi, Li, Jian, Guo, Jian, An, Bo

arXiv.org Artificial Intelligence

Market making (MM) has attracted significant attention in financial trading owing to its essential function in ensuring market liquidity. With strong capabilities in sequential decision-making, Reinforcement Learning (RL) technology has achieved remarkable success in quantitative trading. Nonetheless, most existing RL-based MM methods focus on optimizing single-price-level strategies, which suffer from frequent order cancellations and loss of queue priority. Strategies involving multiple price levels align better with actual trading scenarios. However, because multi-price-level strategies entail a far larger trading action space, effectively training profitable RL agents for MM remains challenging. Inspired by the efficient workflow of professional human market makers, we propose Imitative Market Maker (IMM), a novel RL framework leveraging both knowledge from suboptimal signal-based experts and direct policy interactions to develop multi-price-level MM strategies efficiently. The framework starts by introducing effective state and action representations adept at encoding information about multi-price-level orders. Furthermore, IMM integrates a representation learning unit capable of capturing both short- and long-term market trends to mitigate adverse selection risk. Subsequently, IMM formulates an expert strategy based on signals and trains the agent through the integration of RL and imitation learning techniques, leading to efficient learning. Extensive experimental results on four real-world market datasets demonstrate that IMM outperforms current RL-based market making strategies on several financial criteria. An ablation study substantiates the effectiveness of the model components.
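Integrating RL and imitation learning is commonly realised as a weighted sum of the RL loss and a behaviour-cloning term on the expert's actions. A minimal sketch; the specific weighting scheme is an assumption for illustration, not necessarily IMM's exact objective:

```python
def imitative_policy_loss(rl_loss, expert_action_logprobs, imitation_weight=0.5):
    """Combined objective sketch for imitative RL: the usual RL loss plus a
    behaviour-cloning term that maximises the agent's log-likelihood of the
    (suboptimal) expert's actions. `imitation_weight` is typically annealed
    toward zero as the agent surpasses the expert (an assumption here)."""
    bc_loss = -sum(expert_action_logprobs) / len(expert_action_logprobs)
    return rl_loss + imitation_weight * bc_loss

# toy example: small RL loss, two expert actions with given log-probabilities
loss = imitative_policy_loss(rl_loss=0.2,
                             expert_action_logprobs=[-0.1, -0.3],
                             imitation_weight=0.5)
```

The behaviour-cloning term anchors early training on the signal-based expert, while the RL term lets the policy improve beyond it through interaction.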


Liquidity takers behavior representation through a contrastive learning approach

Ruan, Ruihua, Bacry, Emmanuel, Muzy, Jean-François

arXiv.org Artificial Intelligence

Deep learning has achieved great success in recent years, mainly due to advances in machine learning algorithms and computer hardware. As a result, it has become an indispensable tool in a wide range of fields, both in research and in practical applications. Specifically, in finance, deep learning has been applied extensively to predict stock price movements using limit order book data. This technique is particularly effective in handling complex data which statistical models often struggle to manage. Notable works in the recent literature include [34, 26, 25, 33]. In particular, contrastive learning (CL) is a powerful technique in deep learning that has led to significant advances in representation learning.
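Contrastive representation learning of the kind discussed here typically optimises an InfoNCE-style objective: embeddings of orders from the same liquidity taker (positives) are pulled together, while embeddings of other agents (negatives) are pushed apart. A dependency-free sketch; the vectors and temperature are illustrative:

```python
import math

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE contrastive loss (sketch): negative log softmax probability
    of the positive among [positive] + negatives, using cosine similarity
    scaled by a temperature. Lower loss means the anchor's embedding is
    closer to its positive than to the negatives."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)

    logits = [cos(anchor, positive) / temperature]
    logits += [cos(anchor, n) / temperature for n in negatives]
    m = max(logits)  # stable log-sum-exp
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return -(logits[0] - log_z)

# toy 2-d embeddings: positive nearly aligned with anchor, negatives not
loss = info_nce(anchor=[1.0, 0.0], positive=[0.9, 0.1],
                negatives=[[-1.0, 0.2], [0.0, 1.0]])
```

Trained over many (anchor, positive, negatives) triples drawn from order flow, the encoder learns per-agent behaviour representations without labels.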


Conditional Generators for Limit Order Book Environments: Explainability, Challenges, and Robustness

Coletta, Andrea, Jerome, Joseph, Savani, Rahul, Vyetrenko, Svitlana

arXiv.org Artificial Intelligence

LOBs [22] are a fundamental market mechanism, which are used across a significant proportion of financial markets, including all major stock and derivatives exchanges. The benefits of having robust and realistic simulators for these markets are numerous. For example, they would allow the study of markets under different assumptions, and the investigation of AI techniques for training trading strategies. In a LOB market, matched orders result in trades and unmatched orders are stored in the two parts of the LOB, a collection of buy orders called bids (the bid book), and a collection of sell orders called asks (the ask book). Typically, each side of the LOB contains hundreds of individual orders, and a real market would be updated at microsecond time resolution, driven by a wide range of market participants and facilitated by "high-frequency" market makers [45]. The development of AI-based automated trading strategies for LOB markets has been a growth area in recent years, both within academia and industry, spurred on in part by developments in deep learning and reinforcement learning. Two typical LOB trading problems that have been investigated are market making, where the goal is to provide liquidity to the market by being continually willing to buy and sell an asset (see, e.g., Spooner et al. [50], Jerome et al. [28], Gasperov and Kostanjcar
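The matching mechanics described here (crossed orders trade, unmatched orders rest in the bid or ask book with price-time priority) can be captured in a few lines. A deliberately minimal sketch, ignoring cancellations, order ids, and tick sizes:

```python
import heapq

class MiniLOB:
    """Minimal limit order book with price-time priority: bids are a max-heap
    (negated price key), asks a min-heap; within a price level, earlier
    orders (lower sequence number) fill first. An incoming order trades
    against the opposite book while prices cross, then rests."""
    def __init__(self):
        self.bids, self.asks, self._seq = [], [], 0

    def submit(self, side, price, qty):
        trades = []
        book, opp = ((self.bids, self.asks) if side == "buy"
                     else (self.asks, self.bids))
        while qty and opp:
            best_key, seq, best_price, best_qty = opp[0]
            crossed = (price >= best_price) if side == "buy" else (price <= best_price)
            if not crossed:
                break
            fill = min(qty, best_qty)
            trades.append((best_price, fill))
            qty -= fill
            if fill == best_qty:
                heapq.heappop(opp)
            else:
                # partial fill: shrink the resting order in place
                # (heap invariant holds since the key is unchanged)
                opp[0] = (best_key, seq, best_price, best_qty - fill)
        if qty:  # unmatched remainder rests in the book
            key = -price if side == "buy" else price
            heapq.heappush(book, (key, self._seq, price, qty))
            self._seq += 1
        return trades

lob = MiniLOB()
lob.submit("sell", 101, 5)
lob.submit("sell", 100, 5)
trades = lob.submit("buy", 101, 8)  # sweeps the cheaper ask level first
```

Real matching engines add cancellation, modification, and many order types on top of exactly this core loop.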